
    Requirement analysis and sensor specifications – First version

    In this first version of the deliverable, we make the following contributions. To design the WEKIT capturing platform and the associated experience capturing API, we use a systems engineering methodology that is applicable across domains such as aviation, space, and medicine, and across professions such as technicians, astronauts, and medical staff. Within this methodology, we explore the systems engineering process and how it can be used in the project to support the different work packages and, more importantly, the deliverables that follow this one. Next, we map high-level functions or tasks (associated with experience transfer from expert to trainee) to low-level functions such as gaze, voice, video, body posture, hand gestures, bio-signals, fatigue levels, and the user's location in the environment, and we link these low-level functions to their associated sensors. We also provide a brief overview of state-of-the-art sensors in terms of their technical specifications, possible limitations, standards, and platforms. We then outline a set of recommendations for the sensors most relevant to the WEKIT project, taking into consideration the environmental, technical, and human factors described in other deliverables. We recommend the Microsoft HoloLens (augmented reality glasses), the MyndBand with NeuroSky chipset (EEG), the Microsoft Kinect and Lumo Lift (body posture tracking), and the Leap Motion, Intel RealSense, and Myo armband (hand gesture tracking). For eye tracking, an existing eye-tracking system can be customised to complement the augmented reality glasses, and the built-in microphone of the glasses can capture the expert's voice. We propose a modular approach for the design of the WEKIT experience capturing system and recommend that the capturing system have sufficient storage or transmission capabilities.
    Finally, we highlight common issues associated with the use of the different sensors. We consider that this set of recommendations can be useful for the design and integration of the WEKIT capturing platform and the WEKIT experience capturing API, and can expedite the selection of the combination of sensors to be used in the first prototype.
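    The mapping described above can be sketched as a simple lookup from low-level capture functions to candidate sensors. This is a hypothetical illustration only: the names and structure are assumptions for clarity, not the deliverable's actual data model.

```python
# Hypothetical sketch of the mapping: low-level capture functions ->
# candidate sensors, drawn from the recommendations above. Keys and
# values are illustrative, not the deliverable's actual data model.
SENSOR_MAP = {
    "gaze": ["customised eye tracker"],
    "voice": ["HoloLens built-in microphone"],
    "body_posture": ["Microsoft Kinect", "Lumo Lift"],
    "hand_gestures": ["Leap Motion", "Intel RealSense", "Myo armband"],
    "bio_signals": ["MyndBand (NeuroSky EEG)"],
}

def sensors_for(function: str) -> list[str]:
    """Return candidate sensors for a low-level capture function."""
    return SENSOR_MAP.get(function, [])
```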

    Hardware prototype with component specification and usage description

    Following on from D3.1 and the final selection of sensors, in this D3.2 report we present the first version of the experience capturing hardware prototype design and API architecture, taking into account the limitation that the HoloLens will not be available until early next month, in time for integration into the currently proposed framework and design. This deliverable involved acquiring, testing, and integrating the various off-the-shelf sensors, and developing a hardware/software design to connect the various devices and sensors into a single platform. It also involved solving the problem of how to provide the additional computing power, storage, and wireless streaming capabilities required for this project.
    In this first version of the deliverable we propose an initial design for the hardware and architecture, focusing initially on a bare-bones prototype of the Sensor Processing Unit (SPU), which combines a micro-PC with various sensors to test the flow of data, the capabilities of the system, and possible methods of connecting the various sensors to the device. The first hardware prototype does not yet include the wearable element; however, our selection of the sensors and micro-PC takes into consideration the intention to integrate the hardware using 3D printing, or by creating add-ons for the HoloLens, to avoid a chunky or multi-device hardware solution. D5.2 (M15) will provide the final design for the fashion, wearability, and comfort of the device, taking into consideration the previously completed desk-based research (D5.1, M15). We provide a mock-up, in the Prototype and Usage Descriptions section below, of what the final prototype is expected to look like.
    Separating out the Sensor Processing Unit (SPU) allows development of the hardware and software to continue in the absence of the HoloLens, and also ensures that the platform can be used standalone or easily adapted to other smart glasses. Keeping the sensor processing independent also avoids placing any further strain on the glasses' built-in computer. As described in D3.1, although there are other AR alternatives already available on the market, they all fall short of the required specification and functionality. A decision was therefore made to work with the HoloLens: it is best in class and, although not yet available, has been tested by key partners and will be available in time for the next deliverable. Within this deliverable we also provide an initial mock-up of the hardware design, a sample of the API, and links to device APIs.
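    The SPU's role, as described above, is to pull data from several independent sensors on a micro-PC and queue it for storage or wireless streaming, independently of the smart glasses. The sketch below is a minimal illustration of that data flow under assumed names; the `read_frame()` interface and sensor names are not from the deliverable.

```python
# Minimal sketch of the SPU data flow: one micro-PC process polls
# several sensor streams, timestamps each frame, and queues it for
# storage or a wireless link. All names here are illustrative.
import time
import queue

class SensorStream:
    """Stand-in for one off-the-shelf sensor attached to the SPU."""
    def __init__(self, name: str):
        self.name = name

    def read_frame(self) -> bytes:
        return b"raw-frame"          # real device I/O would happen here

class SensorProcessingUnit:
    def __init__(self, sensors: list[SensorStream]):
        self.sensors = sensors
        self.out = queue.Queue()     # feeds storage or a wireless link

    def poll_once(self) -> int:
        """Read one frame from every sensor; return frames queued."""
        for s in self.sensors:
            self.out.put((time.time(), s.name, s.read_frame()))
        return len(self.sensors)

spu = SensorProcessingUnit([SensorStream("eye"), SensorStream("emg")])
```

    Keeping this loop on the SPU, rather than on the glasses, is what lets the hardware/software development proceed before the HoloLens arrives.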

    Software Prototype with Sensor Fusion API Specification and Usage Description: WEKIT project deliverable D3.3

    This deliverable reports on the components of the first functional prototype of the WEKIT sensor fusion API and the experience gained with it. The development of these components builds on previous work on the WEKIT framework and methodology, the requirements and scenarios, and the technological selections and limitations. The deliverable specifies the key interfaces between the software component and the hardware, the backend infrastructure, and the front-end application modules, and it contains usage recommendations. This deliverable represents the first of two iterations; the second iteration is due in month 27.
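    The interface layering described above (software component against hardware, backend, and front-end) can be sketched as abstract boundaries. This is a hedged illustration: the class and method names are assumptions for exposition, and the deliverable itself defines the actual sensor fusion API.

```python
# Hypothetical sketch of the interface layering: the fusion software
# talks to the hardware and the backend through abstract boundaries.
# Names are illustrative assumptions, not the real WEKIT API.
from abc import ABC, abstractmethod

class HardwareInterface(ABC):
    """Boundary between the fusion software and the capture hardware."""
    @abstractmethod
    def read_samples(self) -> list[dict]: ...

class BackendInterface(ABC):
    """Boundary to the backend infrastructure (storage, upload)."""
    @abstractmethod
    def store(self, fused: dict) -> None: ...

class SensorFusionAPI:
    """Fuses per-sensor samples and hands the result to the backend."""
    def __init__(self, hw: HardwareInterface, backend: BackendInterface):
        self.hw, self.backend = hw, backend

    def step(self) -> dict:
        samples = self.hw.read_samples()
        fused = {s["sensor"]: s["value"] for s in samples}  # trivial fusion
        self.backend.store(fused)
        return fused
```

    Front-end application modules would then consume the fused records, so each layer can be replaced or tested in isolation.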